4040 docker openshift #4168
Conversation
The idea is that we won't mess with the "kick the tires" tag for a bit, so that people can play with Minishift without that tag constantly changing under their feet while we work on other efforts in other tags, such as getting the containers to run as non-root and getting them to run in 1 GB of memory.
Back out of the "kick the tires" story of using OpenShift Online instead of Vagrant, because Dataverse won't fit in the free tier, which limits your application to 1 GB of total memory. PostgreSQL and Solr each seem to run fine in 256 MB of memory, but Dataverse/Glassfish can't run in 512 MB of memory. That said, logic has been added to the installer to check if we're running in Docker and how much memory we have. We don't change the Glassfish heap size based on this value, however, because the war file fails to deploy.
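The Docker-and-memory check mentioned above could look something like this minimal sketch. This is not the installer's actual logic; the `/.dockerenv` marker file and the cgroup v1 path are common conventions, not guarantees, and the variable names are illustrative:

```shell
# Hedged sketch: detect Docker and report the cgroup memory limit.
# /.dockerenv and the cgroup v1 path below are conventions, not guarantees.
if [ -f /.dockerenv ]; then
  echo "Running inside Docker"
fi
LIMIT_FILE=/sys/fs/cgroup/memory/memory.limit_in_bytes
if [ -r "$LIMIT_FILE" ]; then
  limit_bytes=$(cat "$LIMIT_FILE")
  echo "cgroup memory limit: $((limit_bytes / 1024 / 1024)) MB"
else
  echo "No cgroup v1 memory limit found (bare metal, or cgroup v2)"
fi
```

On a host without cgroup v1 the script simply reports that no limit was found, so it degrades gracefully outside containers.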
Conflicts (just a trailing character): doc/sphinx-guides/source/installation/config.rst
}
},
"spec": {
"containers": [
I still really want to see these split out into separate pods.
@danmcp can you please link me to an example that's closer to what you want? Thanks!
@pdurbin In this example:
There is the mysql container inside its deployment config:
And here is the ruby container inside its deployment config:
@danmcp thanks. I made myself a couple of screenshots so I can better see what you're talking about. It sounds like you want multiple DeploymentConfig entries, each with a single container, like this:
What I'm doing right now is using a single DeploymentConfig and putting all three of my containers in it, like this:
@danmcp is there a way to have one DeploymentConfig run before another? Right now the Glassfish/Dataverse container restarts until the PostgreSQL and Solr containers are available.
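For reference, the split into one DeploymentConfig per service might look roughly like the trimmed, hypothetical sketch below. Names and images are illustrative and most required fields (ports, replicas, selectors, env) are omitted:

```json
{
  "kind": "List",
  "apiVersion": "v1",
  "items": [
    {
      "kind": "DeploymentConfig",
      "apiVersion": "v1",
      "metadata": { "name": "dataverse-postgresql" },
      "spec": {
        "template": {
          "spec": {
            "containers": [
              { "name": "postgresql", "image": "example/postgresql" }
            ]
          }
        }
      }
    },
    {
      "kind": "DeploymentConfig",
      "apiVersion": "v1",
      "metadata": { "name": "dataverse-glassfish" },
      "spec": {
        "template": {
          "spec": {
            "containers": [
              { "name": "glassfish", "image": "example/dataverse-glassfish" }
            ]
          }
        }
      }
    }
  ]
}
```

Each DeploymentConfig can then be scaled, redeployed, and health-checked independently, which is the motivation for splitting them out.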
#COPY ./init-dataverse /root/dvinstall
#COPY ./setup-all.sh /root/dvinstall
#COPY ./setup-irods.sh /root/dvinstall
COPY ./Dockerfile /
It shouldn't be necessary to copy the Dockerfile into the image.
@pdurbin You can't force order other than by using an external orchestrator. But generally you want to avoid this sort of methodology, because you would end up in the same state if everything had to be restarted or redeployed. It's better to build your components in a way that doesn't depend on a particular startup order. One way to do this is with an init container: you would add one to the glassfish container, and it would wait for the DB to be available before it exits, at which point glassfish would start up. https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
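The init-container approach described above might be sketched like this on the glassfish pod spec. The names, the busybox image, and the `nc`-based wait loop are placeholders, not the project's actual config; the assumption is that a service named `postgresql` resolves inside the cluster:

```json
{
  "spec": {
    "initContainers": [
      {
        "name": "wait-for-postgres",
        "image": "busybox",
        "command": [
          "sh", "-c",
          "until nc -z postgresql 5432; do echo waiting for db; sleep 2; done"
        ]
      }
    ],
    "containers": [
      { "name": "glassfish", "image": "example/dataverse-glassfish" }
    ]
  }
}
```

The init container blocks the main container from starting until the TCP check succeeds, so a restart or redeploy converges to the same state regardless of which pod comes up first.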
Per our conversation earlier, the scripts look good. Once @matthew-a-dunlap has added his doc change it should be ready for QA.
Use "developer" as the username and a couple characters as the password.

Allow Containers to Run as Root in Minishift
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Whoops! This whole section can be removed. The containers run as non-root as of b84526c
The following curl command is expected to fail until you "expose" the HTTP service.

-``curl http://dataverse-glassfish-service-project1.192.168.99.102.nip.io/api/info/version``
+``curl http://dataverse-glassfish-service-project1.192.168.99.100.nip.io/api/info/version``
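Exposing the service is typically done with `oc expose`, which creates a route for it. This is a sketch: the service name is taken from the hostname in the docs above, and the generated hostname will depend on your Minishift IP and project name:

```shell
# Create a route so the service is reachable from outside the cluster,
# then list routes to see the generated hostname.
oc expose service dataverse-glassfish-service
oc get routes
```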
Ah, I think @pameyer warned me that these IP addresses will change.
@@ -435,7 +441,7 @@ Make Sure the Dataverse API is Working

This should show a version number:

-``curl http://dataverse-glassfish-service-project1.192.168.99.102.nip.io/api/info/version``
+``curl http://dataverse-glassfish-service-project1.192.168.99.100.nip.io/api/info/version``
@@ -444,7 +450,7 @@ Log into Minishift and Visit Dataverse in your Browser

- username: developer
- password: developer

-Visit https://192.168.99.100:8443/console/project/project1/browse/routes and click http://dataverse-glassfish-service-project1.192.168.99.100.nip.io/ or whatever it shows. This assumes you named your project ``project1``.
+Visit https://192.168.99.100:8443/console/project/project1/browse/routes and click http://dataverse-glassfish-service-project1.192.168.99.100.nip.io/ or whatever it shows under "Routes External Traffic". This assumes you named your project ``project1``.
This says 100 also. I think we should back out of the changes to 102 above and just say that the IP will vary.
Oh, duh. It looks like @matthew-a-dunlap was making this IP consistently "100", which is great. I added notes that it can vary in 25cfb30